Search for: All records

Creators/Authors contains: "Sarma, Anita"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).


  1. Novice programming students frequently engage in help-seeking to find information and learn about programming concepts. Among the available resources, generative AI (GenAI) chatbots appear resourceful, widely accessible, and less intimidating than human tutors. Programming instructors are actively integrating these tools into classrooms. However, our understanding of how novice programming students trust GenAI chatbots, and of the factors influencing their usage, remains limited. To address this gap, we investigated the learning resource selection process of 20 novice programming students tasked with studying a programming topic. We split our participants into two groups: one using ChatGPT (n=10) and the other using a human tutor via Discord (n=10). We found that participants held strong positive perceptions of ChatGPT's speed and convenience but were wary of its inconsistent accuracy, making them reluctant to rely on it for learning entirely new topics. Accordingly, they generally preferred more trustworthy resources (e.g., instructors, tutors) for learning, reserving ChatGPT for low-stakes situations or more introductory and common topics. We conclude by offering guidance to instructors on integrating LLM-based chatbots into their curricula, emphasizing verification and situational use, and to developers on designing chatbots that better address novices' trust and reliability concerns.
    Free, publicly-accessible full text available October 21, 2026
  2. Generative AI (genAI) tools (e.g., ChatGPT, Copilot) have become ubiquitous in software engineering (SE). As SE educators, it behooves us to understand the consequences of genAI usage among SE students and to create a holistic view of where these tools can be successfully used. Through 16 reflective interviews with SE students, we explored their academic experiences of using genAI tools to complement SE learning and implementations. We uncovered the contexts where these tools are helpful and where they pose challenges, and examined why these challenges arise and how they impact students. We validated our findings through member checking and triangulation with instructors. Our findings provide practical considerations of where and why genAI should (not) be used in the context of supporting SE students.
    Free, publicly-accessible full text available April 28, 2026
  3. Large Language Model (LLM) conversational agents are increasingly used in programming education, yet we still lack insight into how novices engage with them for conceptual learning compared with human tutoring. This mixed-methods study compared learning outcomes and interaction strategies of novices using ChatGPT or human tutors. A controlled lab study with 20 students enrolled in introductory programming courses revealed that students employ markedly different interaction strategies with AI versus human tutors: ChatGPT users relied on brief, zero-shot prompts and received lengthy, context-rich responses but showed minimal prompt refinement, while those working with human tutors provided more contextual information and received targeted explanations. Although students distrusted ChatGPT's accuracy, they paradoxically preferred it for basic conceptual questions due to reduced social anxiety. We offer empirically grounded recommendations for developing AI literacy in computer science education and designing learning-focused conversational agents that balance trust-building with maintaining the social safety that facilitates uninhibited inquiry.
    Free, publicly-accessible full text available July 7, 2026
  4. Free, publicly-accessible full text available March 25, 2026
  5. Code reviews are a ubiquitous and essential part of the software development process. They also offer a unique, at-scale opportunity for teaching developers in the context of their day-to-day development activities, rather than in something more removed and formal, like a class. Yet there is little research on effective teaching through code reviews: focusing on learning for the author and not just changes to the code. We address this gap through a case study at Google: interviews with 14 developers revealed 12 patterns and 15 anti-patterns in code reviews that impact learning. For instance, explanatory rationale, sample solutions backed by standards, and a constructive tone facilitate learning, whereas harsh comments, excessive shallow critiques, and non-pragmatic reviewing that ignores authors' constraints hinder learning. We validated our qualitative findings through member checking, interviews with reviewers, a literature review, and a survey of 324 developers. This comprehensive study provides empirical evidence of how social dynamics in code reviews impact learning. Based on our findings, we provide practical recommendations on how to frame constructive reviews to create a supportive learning environment.
  6. Generative AI (genAI) tools (e.g., ChatGPT, Copilot) have become ubiquitous in software engineering (SE). As SE educators, it behooves us to understand the consequences of genAI usage among SE students and to create a holistic view of where these tools can be successfully used. Through 16 reflective interviews with SE students, we explored their academic experiences of using genAI tools to complement SE learning and implementations. We uncovered the contexts where these tools are helpful and where they pose challenges, and examined why these challenges arise and how they impact students. We validated our findings through member checking and triangulation with instructors. Our findings provide practical considerations of where and why genAI should (not) be used in the context of supporting SE students.
    Free, publicly-accessible full text available April 27, 2026
  7. Research within sociotechnical domains, such as Software Engineering, fundamentally requires the human perspective. However, traditional qualitative data collection methods suffer from difficulties in participant recruitment, limited scalability, and high labor intensity. This vision paper proposes a novel approach to qualitative data collection in software engineering research by harnessing the capabilities of artificial intelligence (AI), especially large language models (LLMs) like ChatGPT and multimodal foundation models. We explore the potential of AI-generated synthetic text as an alternative source of qualitative data, discussing how LLMs can replicate human responses and behaviors in research settings. We discuss AI applications in emulating humans in interviews, focus groups, surveys, observational studies, and user evaluations, and we outline open problems and research opportunities for implementing this vision. In the future, an integrated approach in which both AI-generated and human-generated data coexist will likely yield the most effective outcomes.